107 research outputs found
A Bibliometric Survey on the Reliable Software Delivery Using Predictive Analysis
Delivering a reliable software product is a fairly complex process that requires proper coordination among the various teams in planning, execution, and testing. Much of the development time and the software budget's cost is spent finding and fixing bugs. Rework and side-effect costs, caused by bugs inherent in modified code, are mostly invisible in planned estimates, which impacts the software delivery timeline and increases cost. Advances in artificial intelligence can predict probable defects through classification based on software code changes, helping the software development team make rational decisions. Optimizing software cost and improving software quality are the industry's top priorities for remaining profitable in a competitive market. Hence, there is a great urge to improve software delivery quality by minimizing defects and keeping reasonable control over predicted defects. This paper presents a bibliometric study of reliable software delivery using predictive analysis, based on 450 documents selected from the Scopus database using keywords such as software defect prediction, machine learning, and artificial intelligence. The study covers the period from 2010 to 2021. The survey shows that software defect prediction has received excellent focus among researchers, and that there are great possibilities for predicting and improving overall software product quality using artificial intelligence techniques.
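As a minimal, hypothetical sketch of the defect-prediction idea this abstract describes, i.e. classifying code changes as likely buggy or clean, a nearest-centroid classifier over change metrics might look like this (the feature names and the toy commit history are assumptions for illustration, not drawn from the surveyed papers):

```python
# Sketch: defect prediction as binary classification over code-change
# metrics (lines added, lines deleted, files touched). Assumed features.

def centroid(rows):
    """Mean vector of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def dist2(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(history):
    """history: list of (features, is_buggy) pairs -> two class centroids."""
    buggy = [f for f, y in history if y]
    clean = [f for f, y in history if not y]
    return centroid(buggy), centroid(clean)

def predict(model, features):
    """Classify a change as buggy if it lies nearer the buggy centroid."""
    c_buggy, c_clean = model
    return dist2(features, c_buggy) < dist2(features, c_clean)

# Toy commit history: (lines_added, lines_deleted, files_touched)
history = [
    ((300, 120, 9), True), ((450, 200, 12), True),
    ((10, 2, 1), False), ((25, 5, 2), False),
]
model = train(history)
print(predict(model, (400, 150, 10)))  # large change -> True (likely buggy)
```

Real studies in this area use richer process and code metrics and learned models; the nearest-centroid rule here only illustrates the classification framing.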
AHP validated literature review of forgery type dependent passive image forgery detection with explainable AI
Nowadays, a lot of significance is given to what we read: newspapers, magazines, news channels, and internet media such as the leading social networking sites Facebook, Instagram, and Twitter. These are the primary wellsprings of fake news and are frequently utilized in malicious ways, for example for mob incitement. In the recent decade, a tremendous increase in image generation has occurred due to the massive use of social networking services. Image editing software such as Skylum Luminar, Corel PaintShop Pro, Adobe Photoshop, and many others can be used to create and modify images and videos, which is a significant concern. Much of the earlier work on forgery detection focused on traditional methods. Recently, deep learning algorithms have accomplished high accuracies in the image processing domain, for tasks such as image classification and face recognition, and experts have applied deep learning techniques to detect image forgery as well. However, there is a real need to explain why an image is categorized as forged in order to understand the algorithm's validity; this explanation helps in mission-critical applications like forensics. Explainable AI (XAI) algorithms have been used to interpret a black box's decisions in various cases. This paper contributes a survey of image forgery detection with deep learning approaches and also surveys explainable AI for images.
An improved approach of FP-Growth tree for Frequent Itemset Mining using Partition Projection and Parallel Projection Techniques
Data mining is about analyzing data and extracting information from it. It is a very topical and interesting problem, with more and more data being stored in databases. Its most important applications include customer purchasing behaviour, shopping-cart analysis, campaign management, customer relationship management, web usage mining (web mining), and text mining. In the current age of science, technology has been developed by which data about anything, such as a person, place, shop, or organization, can be stored. Analysis shows that FP-Growth is more efficient in terms of tree construction than Apriori and Tree Projection, and that Tree Projection is faster and more scalable than Apriori. The parallel projection technique proves more scalable than partition projection, whereas partition projection saves memory and works well for dispersed datasets. When FP-Growth and Tree Projection are compared on the benefits they hold, Apriori does not turn out to be convenient enough: the advantage of FP-Growth over Apriori is clearest on datasets containing an enormous number of combinations of short frequent patterns. This work implements the FP-Growth tree together with projection techniques, i.e., the partition projection technique, constructed to reduce the execution time of FP-Growth tree construction.
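The frequent-itemset task that FP-Growth, Tree Projection, and Apriori all solve can be illustrated with a brute-force sketch (this is Apriori-style candidate counting, not FP-tree construction; the basket data is invented for illustration):

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Brute-force frequent-itemset mining: count every candidate itemset
    and keep those meeting min_support. FP-Growth reaches the same result
    without candidate generation by compressing the transactions into a
    prefix tree (the FP-tree) and mining it recursively."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(items, k):
            support = sum(1 for t in transactions if set(cand) <= set(t))
            if support >= min_support:
                result[cand] = support
                found = True
        if not found:  # Apriori property: no larger itemset can qualify
            break
    return result

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
print(frequent_itemsets(baskets, min_support=2))
# e.g. ('bread', 'milk') has support 2: it occurs in baskets 1 and 3
```

The exponential candidate space enumerated here is exactly the cost that FP-Growth's tree compression and the partition/parallel projection techniques aim to avoid on large datasets.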
Transfer Learning for Real-time Deployment of a Screening Tool for Depression Detection Using Actigraphy
Automated depression screening and diagnosis is a highly relevant problem
today. There are a number of limitations of the traditional depression
detection methods, namely, high dependence on clinicians and biased
self-reporting. In recent years, research has suggested strong potential in
machine learning (ML) based methods that make use of the user's passive data
collected via wearable devices. However, ML is data-hungry, and primary data
collection is especially challenging in the healthcare domain. In this work, we
present an approach based on transfer learning, from a model trained on a
secondary dataset, for the real-time deployment of a depression screening
tool based on the actigraphy data of users. This approach enables machine
learning modelling even with limited primary data samples. A modified version
of the leave-one-out cross-validation approach, performed on the primary set,
resulted in a mean accuracy of 0.96; in each iteration, one subject's data
from the primary set was set aside for testing.
Comment: 5 pages, 4 figures, conference, to be published in UKSIM2
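The modified leave-one-out protocol described above, holding out one subject's data per iteration, can be sketched as follows (the subject ids and samples are invented; the actual model and actigraphy data are not reproduced):

```python
def leave_one_subject_out(subjects):
    """Yield (held_out_id, train, test) splits in which each subject's
    data is held out exactly once, so no subject appears in both sets.
    `subjects` maps a subject id to that subject's list of samples."""
    ids = sorted(subjects)
    for held_out in ids:
        train = [s for sid in ids if sid != held_out for s in subjects[sid]]
        test = subjects[held_out]
        yield held_out, train, test

# Toy data: three subjects with one or two samples each
data = {"s1": [1, 2], "s2": [3], "s3": [4, 5]}
for sid, train, test in leave_one_subject_out(data):
    print(sid, train, test)
# s1 [3, 4, 5] [1, 2]
# s2 [1, 2, 4, 5] [3]
# s3 [1, 2, 3] [4, 5]
```

Splitting by subject rather than by sample prevents a subject's own data from leaking between the training and test sets, which matters when samples from one person are highly correlated.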
A Bibliometric Survey of Smart Wearable in the Health Insurance Industry
Smart wearables enable real-time and remote monitoring of health data for effective diagnostic and preventive health care services. Wearable devices have the ability to track and monitor healthcare vitals such as heart rate, physical activity, BMI (Body Mass Index), and blood pressure, and keep an individual notified about their health status. Artificial-intelligence-enabled wearables show an ability to transform the health insurance sector. This would not only enable self-management of individual health but also shift the focus from treatment to the prevention of health hazards. With this customer-centric approach to health care, insurance companies would be able to track the health behaviour of individuals. This can perhaps lead to better incentivization models with lower premiums for health-centric customers. Health insurance companies can have better outreach with these customer-centric products. The area is exceptionally novel and shows potential for research opportunities. Although the literature includes a few works on the application of smart wearables in health insurance, these works are scattered across sections of society and extremely limited by regions and boundaries. Thus, a bibliometric survey in the area of smart wearables in health insurance is necessary to track research trends, progress, and the scope of future research. This paper conducts a bibliometric study of “Smart Wearables in Health Insurance Industry” by extracting a total of 287 documents from the Scopus database using keywords such as wearables, health insurance, health care, machine learning, and health risk prediction. The study covers the last decade, 2011-2020. From the study, it is observed that the application of wearables in health insurance is at a nascent stage, and there is scope for researchers, insurers, and health care stakeholders to explore use cases for a better user experience.
In Rain or Shine: Understanding and Overcoming Dataset Bias for Improving Robustness Against Weather Corruptions for Autonomous Vehicles
Several popular computer vision (CV) datasets, specifically employed for
Object Detection (OD) in autonomous driving tasks exhibit biases due to a range
of factors including weather and lighting conditions. These biases may impair a
model's generalizability, rendering it ineffective for OD in novel and unseen
datasets. In autonomous driving especially, this may prove extremely risky
and unsafe for the vehicle and its surroundings. This work focuses on
understanding these datasets better by identifying such "good-weather" bias.
Methods to mitigate such bias which allows the OD models to perform better and
improve the robustness are also demonstrated. A simple yet effective OD
framework for studying bias mitigation is proposed. Using this framework, the
performance on popular datasets is analyzed and a significant difference in
model performance is observed. Additionally, a knowledge transfer technique and
a synthetic image corruption technique are proposed to mitigate the identified
bias. Finally, using the DAWN dataset, the findings are validated on the OD
task, demonstrating the effectiveness of our techniques in mitigating
real-world "good-weather" bias. The experiments show that the proposed
techniques outperform baseline methods by an average fourfold improvement.
Comment: Under review
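The paper's synthetic image corruption technique is not specified here; a crude stand-in for one weather corruption, fog, is to blend each pixel toward white (the blending rule and intensity value are assumptions for illustration only):

```python
def add_fog(pixel, intensity):
    """Blend one RGB pixel toward white: out = (1 - a) * p + a * 255.
    Higher intensity washes the pixel out more, mimicking haze/fog."""
    return tuple(round((1 - intensity) * c + intensity * 255) for c in pixel)

def corrupt_image(image, intensity=0.5):
    """Apply the fog blend to every pixel of a nested-list RGB image."""
    return [[add_fog(px, intensity) for px in row] for row in image]

# Toy 1x2 image: a black pixel and a blue-ish pixel
img = [[(0, 0, 0), (100, 150, 200)]]
print(corrupt_image(img, 0.5))
```

Training on images augmented this way is one simple route to exposing an object detector to "bad-weather" appearance; the paper's actual corruption and knowledge-transfer techniques may differ.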